We examine a novel setting in which two parties have partial knowledge of the elements that make up a Markov Decision Process (MDP) and must cooperate to compute and execute an optimal policy for the problem constructed from those elements. This situation arises when one party wants to give a robot some task but does not wish to divulge the task's details to a second party, while that second party holds sensitive data about the robot's dynamics, information needed for planning. Both parties want the robot to perform the task successfully, but neither is willing to disclose any more information than is absolutely necessary. We use techniques from secure multi-party computation, combining primitives and algorithms to construct protocols that compute an optimal policy while ensuring that the policy remains opaque by being split across both parties. To execute a split policy, we also give a protocol that enables the robot to determine which actions to trigger, while the second party guards against attempts to probe for information inconsistent with the policy's prescribed execution. To improve scalability, we find that basis functions and constraint-sampling methods are useful in forming effective approximate MDPs. We report simulation results examining performance and precision, and assess the scaling properties of our Python implementation. We also describe a hardware proof-of-feasibility implementation using inexpensive physical robots, which, being a small-scale instance, can be solved directly.
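As an illustration of the policy-splitting idea, the sketch below uses additive secret sharing to store a tabular policy as two shares, one per party, so that neither share alone reveals the policy and only the action for the queried state is reconstructed at execution time. This is a simplified, hypothetical stand-in for the protocols described above (which also compute the policy securely, not just store it); the modulus, function names, and toy policy are assumptions.

```python
# Illustrative sketch only: additive secret sharing of a tabular policy.
# Not the paper's protocol; share_policy/reveal_action and PRIME are
# hypothetical names chosen for this example.
import secrets

PRIME = 2**61 - 1  # field modulus for the additive shares (assumption)

def share_policy(policy):
    """Split a list of action indices into two additive shares mod PRIME."""
    share_a, share_b = [], []
    for action in policy:
        r = secrets.randbelow(PRIME)
        share_a.append(r)                       # party A's share
        share_b.append((action - r) % PRIME)    # party B's share
    return share_a, share_b

def reveal_action(state, share_a, share_b):
    """Both parties contribute their share for one state; summing them
    reveals only the prescribed action, not the rest of the policy."""
    return (share_a[state] + share_b[state]) % PRIME

# Example: a 4-state policy over actions {0, 1, 2}.
policy = [2, 0, 1, 1]
a, b = share_policy(policy)
assert all(reveal_action(s, a, b) == policy[s] for s in range(len(policy)))
```

Each share on its own is uniformly random, so a party holding only one share learns nothing about the actions; this is the sense in which the executed policy stays opaque to either side alone.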
We present an incremental, scalable motion planning algorithm for finding maximally informative trajectories for decentralized mobile robots. These robots are deployed to observe an unknown spatial field, where the informativeness of observations is specified by a density function. Existing approaches are typically restricted to discrete domains and synchronous planning, and often scale poorly with the size of the problem. Our goal is to design a distributed control law in continuous domains, together with an asynchronous communication strategy, to guide a team of cooperative robots to visit the most informative locations within a limited mission duration. Our proposed Asynchronous Information Gathering with Bayesian Optimization (AsyncIGBO) algorithm extends ideas from asynchronous Bayesian Optimization (BO) to efficiently sample from a density function, and combines them with decentralized reactive motion planning techniques to achieve efficient multi-robot information gathering. We provide a theoretical justification for our algorithm by deriving an asymptotic no-regret analysis with respect to a known spatial field. The proposed algorithm is extensively validated through simulations and real-world experiments with multiple robots.
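For context, the sketch below shows the Bayesian-optimization ingredient in its simplest form: a single robot with a Gaussian-process surrogate picks its next sample location in a 1-D workspace by maximizing an upper-confidence-bound acquisition over the unknown density. It is not the AsyncIGBO algorithm itself (which is asynchronous, decentralized, and coupled to reactive motion planning); the toy density, kernel, and exploration weight below are assumptions made for illustration.

```python
# Minimal single-robot BO loop for choosing informative sample locations.
# Illustrative only; not the authors' AsyncIGBO algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def density(x):
    """Toy stand-in for the unknown spatial field being observed."""
    return np.exp(-((x - 0.7) ** 2) / 0.02) + 0.5 * np.exp(-((x - 0.2) ** 2) / 0.05)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # discretized workspace
X = rng.uniform(0.0, 1.0, size=(3, 1))                  # initial observation sites
y = density(X).ravel()                                   # observed field values

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
beta = 2.0  # exploration weight in the UCB acquisition (assumption)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + beta * sigma                 # upper confidence bound
    x_next = candidates[np.argmax(ucb)]     # next location to visit
    X = np.vstack([X, x_next])
    y = np.append(y, density(x_next)[0])

print("Locations visited after the initial samples:", X[3:].ravel())
```

The asynchronous, multi-robot version replaces this single sequential loop with per-robot acquisitions that are updated whenever new observations arrive from teammates, rather than waiting for a synchronized planning round.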
